The Polygraph Place



  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  Utah PLT Test

Author Topic:   Utah PLT Test
LSUPoly
Member
posted 11-03-2004 06:43 AM
I just returned from the Alabama Association of Polygraph Examiners seminar, where Don Krapohl spoke to us about the Utah Probable-Lie Test and its accuracy. The test is described in the Handbook of Polygraph Testing, edited by Murray Kleiner. Can someone clarify the numerical scoring of this test for me? My understanding is that the result is obtained from the grand total of the scores. My question is: do you also use spot scores for a DI call, as in the Backster Zone? Thanks for any assistance.
Ben


J L Ogilvie
Moderator
posted 11-03-2004 09:13 AM
If you are referring to Raskin's "Utah Modified Comparison Question" test, it is scored on a seven-point scale with an overall score.

You could have a -3 on one relevant question and still have a truthful outcome if the other relevant questions added up to +9 or more. The total of all relevants together has to reach at least +6 or -6 for a conclusive result; anything in between is inconclusive.
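In code form, that grand-total rule reads roughly like this (a Python sketch; the +6/-6 cutoff is as described above, and the function name and everything else is just my own illustration, not anyone's official scoring software):

def utah_overall_call(relevant_scores, cutoff=6):
    # The decision rests on the grand total across all relevant
    # questions, not on any single spot score.
    total = sum(relevant_scores)
    if total >= cutoff:
        return "NDI"   # truthful
    if total <= -cutoff:
        return "DI"    # deceptive
    return "INC"       # anything in between is inconclusive

# One relevant at -3 can still yield a truthful call if the others
# add up to +9 or more: -3 + 4 + 5 = +6  ->  "NDI"
print(utah_overall_call([-3, 4, 5]))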

He does have strict scoring guidelines to use with this technique.

It is a very accurate and effective test, although I don't care for the four-question version because of the tendency to use it incorrectly.

Hope this helps, and good luck, Jack


LSUPoly
Member
posted 11-03-2004 09:51 AM
That is the test I was referring to. Thank you for the clarification. I just used it this morning and I like it.
Ben


Barry C
Member
posted 11-04-2004 08:04 AM
I like the test too. What's nice is you can use DLCs too if the situation calls for it. (Those might be preferred in an IA case in which you fear admissions to the PLCs.) The APA's 1999 special edition of POLYGRAPH spells out just how to score the tests. It's an excellent resource to have, and it is still current.

Remember, the Utah test only uses the 10 criteria Don is now teaching. Honts et al. do not agree with Backster's scoring rules and methods, and you shouldn't be using those scoring rules on the Utah test if you want to stay safely within the bounds of their supporting research. (And, as I'm sure Don told you, they have a lot of it.)


Barry C
Member
posted 11-04-2004 08:06 AM
I forgot to mention: they do use spot scores (+/- 3) on multi-facet (exploratory) tests, but not on specific-issue tests as described above.
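To put that next to the grand-total rule, a quick Python sketch (the +/-3 spot cutoff is as stated above; the labels and names are mine, purely for illustration):

def multi_facet_spot_calls(spot_scores, spot_cutoff=3):
    # On a multi-facet (exploratory) test, each relevant question's
    # subtotal is judged against the +/-3 spot cutoff on its own.
    calls = []
    for score in spot_scores:
        if score >= spot_cutoff:
            calls.append("NSR")   # no significant response at this spot
        elif score <= -spot_cutoff:
            calls.append("SR")    # significant response at this spot
        else:
            calls.append("INC")
    return calls

print(multi_facet_spot_calls([4, -3, 1]))  # ['NSR', 'SR', 'INC']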


sackett
Moderator
posted 02-25-2005 05:50 AM
Barry C,

thanks for the input and advice on the open site. I was aware this topic had been discussed, so I brought it back up for discussion. I wasn't aware of this technique's scoring methodology.

While I am not a researcher or academic (by any means), I do have a problem with any test that can show significant, consistent response at any given relevant question during the exam and still have the potential for an overall "truthful" outcome or result.

Anyone else have an opinion on this? I'm just curious.

Jim


Capstun
Member
posted 02-25-2005 07:00 AM
I feel the same as Jim. I was recently sent an exam administered by a private examiner who used the modified Utah format. It was a criminal case, and the examiner scored the subject truthful on an exam with 4 relevants, all separate issues. Using Backster, I scored it deceptive on one relevant issue and inconclusive overall, leaning toward deceptive. I didn't like the entire format, so I sent it to DODPI for an informal opinion. They didn't like it either and do not consider the format acceptable. Another private examiner who uses the Utah format told me DODPI tore her test apart after it was sent there for a formal review concerning a federal criminal case. She stated DODPI said the test was "S#*&%" (her translation of the critique, I'm sure, not DODPI's exact words).

I am by no means an expert on the Utah format, and I don't use it. But I am doing 1-2 stipulated examinations a week on criminal cases, and I would not want to testify in court using a format that DODPI considers invalid and would testify against me. I would never score a chart truthful when I have consistent and significant responses on one relevant.

As a side note, on the exam I reviewed, I sent the question list and format to two other highly respected examiners (one nationally and one internationally known), and neither accepted the format as valid.

It is highly likely that the format is not what Krapohl is promoting, although it sure looked like what he was talking about in the 1999 POLYGRAPH article. DODPI advised that the format I sent them was what Honts is now promoting, but one they do not recognize as valid.

Just food for thought.

-JW


Taylor
Member
posted 02-25-2005 08:37 AM
I use the Utah Zone (3 relevant questions) all the time and really like it for my PCSOT exams. I run at least 3 charts, and I usually score with a 3-point system. When the relevant questions are similar but not defined as a 'specific' issue, you actually score vertically only. I really like the way you move the comparison questions on each exam; this way you can tell whether a comparison is effective. Anyway, that is my experience. I have some written material (from Tom Ezell) I could send you if you would like. Taylor


Ted Todd
Member
posted 02-25-2005 09:30 AM
I agree with Jim. If you have multiple issues on the same test, how can you score the test on an overall basis? If I saw a consistent response to any of the relevants, I would more than likely run an SI test on just that relevant.

Ted


sackett
Moderator
posted 02-25-2005 09:42 AM
Ted,

yep! Although I'm not Backster trained, I would suggest that was the purpose of his SKY series: ID a potential area, then break it out on a specific-issue exam.

Jim


Barry C
Member
posted 02-25-2005 11:20 AM
I find it hard to believe that DoDPI thinks the Utah test is no good, as the Utah guys have done a lot of the DoDPI research.

I think we do ourselves harm when we say a technique is "valid" because I ask my questions in a certain order, but somehow invalid if I leave out an irrelevant. For example, the research on outside issue questions is clear: they don't work. In addition, there is some evidence they reduce the scores of the innocent but do nothing for the guilty.

If I remove those questions from my Bi-Zone test (my test of choice), then is the test invalid? Some would say yes, but that's not very scientific. After all, a Bi-Zone without the symptomatics is an abbreviated AFMGQT, which we would all consider "valid." (And the AFMGQT can be a multi-issue or single-issue test, and you can use 2, 3, or 4 RQs.)

It's time we recognized (as the Utah folks do) that there are certain principles we must follow to make a test valid. The format of the questions is not that big a deal as long as the test has a few well-developed RQs and CQs and gives the subject a chance to "settle in," so to speak, with a neutral and a sacrifice relevant.

In 2002 Don Krapohl was tasked by the APA to "...identify those methodologies that the APA could defend with science." He mentioned a "couple stumbling blocks on the road to this list of defensible techniques." One such stumbling block is the fact that "...virtually no polygraph technique remains static. Over time, all undergo revisions, alterations, additions, and revamping, some more than others."

Prior to creating the list he made this disclaimer: "As tasked, I am providing the following list of techniques that have had a body of replicated and peer-reviewed research that uses the name of the technique. For the reasons outlined above [in his letter, that is - I've not included them all], it cannot be considered the list of valid techniques, nor necessarily the best techniques."

The list is as follows:

Army MGQT
Backster single-issue ZCT
Federal single-issue ZCT
Reid single-issue ZCT
TES
Utah single-issue ZCT

And, he adds the two knowledge tests, the POT and CIT.

I'm going to repeat myself here by quoting him again:

"With regard to the issue of validity, a careful review of the polygraph literature has led me to conclude that testing techniques are secondary to testing principles. That is, it is more important from a validity standpoint to incorporate essential core practices than to take a fundamentalist approach to a given testing technique. There simply isn't any good evidence that any one of the many techniques in common use is more accurate than another, and polygraphy appears to have survived the variations over the years." He goes on to say, "...differences in techniques probably amount to nothing more than style."

I am an admitted geek. I've read a considerable amount of the literature, and I can't disagree with him on any of what he says.

I too have some trouble with the idea of looking at the score as a whole, ignoring a negative spot total, but again the data is against intuition (in single-issue tests). Look at OSS, for example. It outperforms hand scorers, and it only looks at a single total regardless of spots. Don is working on an article now (I think with Stu Senter) on optimal scoring procedures. If I remember correctly, Stu published an article in POLYGRAPH supporting looking only at total scores for a decision. Only after a total score is INC would one look to spot scores for a DI decision.

The Utah multi-issue test scoring is a little odd, but they are asking the question, "Which questions are more salient?" If each question gets a +2 total, many would call all questions INC. Utah would give the test a +8 and call the person NDI to all. They base that scoring system on their research; how extensive that is I don't know. (It is very extensive for single-issue tests. The Utah test is probably the most researched test out there, and there are several "versions," as the Utah guys believe in principles, not strict adherence to formats.)
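To make the contrast concrete, here is a toy Python sketch of the two approaches just described (the +6 overall and -3 spot cutoffs are taken from earlier in the thread; nothing here is official Utah or OSS code):

def total_first_call(spot_scores, total_cutoff=6, spot_cutoff=3):
    # Total-score-first rule as described above: decide on the grand
    # total; look to spot scores for a DI call only if the total is INC.
    total = sum(spot_scores)
    if total >= total_cutoff:
        return "NDI"
    if total <= -total_cutoff:
        return "DI"
    if min(spot_scores) <= -spot_cutoff:
        return "DI"
    return "INC"

# Four spots at +2 each: judged spot by spot, all would be INC, but the
# grand total is +8, so the Utah approach calls the subject NDI to all.
print(total_first_call([2, 2, 2, 2]))  # "NDI"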

From a scientific standpoint, one could argue a federal ZCT in which the examiner adds even one extra neutral to get a person back to baseline deviates from the "validation" studies and could be considered invalid. After all, how many validity studies have been conducted for tests in which a neutral is tossed in between question 7 (R) and 8 (SYM / OI)? What about repeating a zone within a chart because of an error?

I hope this makes sense. I've been interrupted a dozen times, and I'm too lazy to proofread.


sackett
Moderator
posted 02-25-2005 01:00 PM
Barry C: First let me say, I know you were explaining, NOT defending, the discussion concerning the Utah test format and scoring, and my response here is directed at the topic, not you or anyone else personally.
____________________________________________

Having said that, let me clarify my thoughts. The polygraph profession has spent much time and effort establishing particular techniques (formats) that are considered valid through research and academic acceptance (e.g., R&I, PLCQT, DLCQT). While reinventing the original mousetrap, we as a community have, in some respects, lost sight of the fact that we're still simply trying to catch mice…

Much, if any, acceptance of new (or redeveloped/reintroduced) testing principles, techniques, formats, or even ideas comes simply with accolades from those who invented them in the first place, followed by reviews from others who want to see if they work properly, as reported. [I won't even address the ego problem we have as a profession.]

It's sort of like the manufacturers coming out with new computerized polygraph machines with 37 annex and auxiliary channels. It sounds new and improved, and everyone will want one, but to what use!? Reactions to relevant questions are still reactions to relevant questions! If using a CQ technique, compare them to something! If not, look for consistent and significant reaction and make the call. The collection of physiological data is now, and has been, confined to the basic three components. We should stick to that and not try to get fancy or cute unless the research is there to support it.

The Utah test and scoring practice (as explained to me on the board) flies in the face of all that I was taught relative to psychological set, etc. You mentioned the Utah guys adhering to principles, not formats. Isn't that what we're supposed to be doing: follow the formats, and the principles will follow?

Regarding psychological set, has this principle been disproved, discontinued, or severely modified as a researched basis of polygraph principles/theory? If not, then scoring spots separately is inappropriate and invalid (regardless of pending research). If so, someone had better tell the rest of us and be able to prove it through valid, reliable research.

Jim


Barry C
Member
posted 02-25-2005 02:09 PM
Jim,

I don't understand what you mean by the psychological set question in the last paragraph.

First, let me say - as I have before - I am not sold on the theory of psychological set, and that is largely because it is not well defined by those of us in the profession. It means different things to different people.

Matte defines it as follows:

"Also known as Selective Attention, it is an adaptive psychophysiological response to fears, anxieties, and apprehensions with a selective focus on the particular issue or situation which presents the greatest threat to the legitimate security of the examinee while filtering out lesser threats."

From a scientific standpoint, it has never really been proven at all, since "selective attention" is a real term found in the psychological literature referring to two simultaneous stimuli - not successive stimuli occurring 30 seconds apart (and then some). Each type of question (RQ and CQ) is focused on to some extent. The question is, how much focus? We assume that if a person focuses on one question type over another, then he or she is truthful or deceptive, and we are right most of the time. (Now, if selective attention is a term we stole from the psychological community and redefined - like "control" - then we should say so. But there is no sound scientific way to support selective attention in polygraph with research.)

Now with that said, I don't think it matters why a person focuses on one question type over another. (The OR and DR are two other theories.) We just know they do so consistently, which is why we can do what we do so well. (Now again, to straddle the fence, if psych set simply means one question type is more salient to the truthful and one to the deceptive, then I'm a believer.)

A deceptive response to one RQ does dampen others, which is why validity goes down, why "if you fail one question, you fail them all" is a good approach, and why the Utah and CPC are odd. But we do it all the time in screening exams. Currently, the best practice (and ASTM standard) is to run specific-issue tests on any SR responses in the screening exam. So you can "pass" some and "fail" others in a screening exam, BUT at the risk of missing other issues.

I read a study on the TES, and I have always been troubled by it. (In the TES, two RQs are asked.) The study found that the deceptive sometimes reacted to, say, RQ1 when they were in fact deceptive to RQ2, which they didn't react to.

When all is said and done, I agree with you for the most part. I just have a hard time when we call any technique "valid" and think it really means something. Just this week I went through every case in the DoDPI confirmed-case database (field cases) looking for Bi-Zones for some research I'm doing (oddly enough, to "validate" it and get some OSS cut-off scores), and I can tell you from that there are a lot of deviations from what we call a particular format. The bottom line is polygraph is robust enough to handle the deviations and still be valid. (A lot of research comes from that database.)

And don't worry, I understand this is a healthy discussion - not a personal attack. Sometimes I'll say things I don't agree with just to stimulate the conversation, and I suspect others do as well. I enjoy the thought-provoking input.


sackett
Moderator
posted 02-26-2005 09:09 AM
Barry C,

All I meant by the last paragraph was that if the theory has been discarded, someone should tell us (in the field) why, and what took its place. If not, then something is wrong when RQs scored separately during an exam can show an individual DI to some RQs and NDI to others, while the numerical scoring (summing up) still potentially allows an overall NDI!

I agree psych set is, and should be, between RQs and CQs only. My point was that with the Utah scoring (as described), an examiner was able to call overall truthfulness even though specific relevant questions showed significant, consistent reaction.

Given the Utah scoring discussion we were having, suppose an individual, oh, let's say a hardened gangbanger (like those I periodically deal with), comes in on a robbery.

Issue: Armed robbery of cash from a drug dealer.

So, I run a MGQT with the following:

3. Did you agree with anyone to steal..
5. DY steal...
8. Were you present when the $ was stolen
9. Were you inside a grey Cadillac on Thursday PM (getaway car)

In theory, if this was really a "robbery" between dope dealers, where the robber was really recovering money from a past deal gone bad, then the issue of stealing is a non-issue in the bad guy's mind. Meaning, to him, this is not stealing; this is getting his money back by force. The fact that he used his grandmother's car (the grey Cadillac) could be the most disturbing issue, since she is the most important person to him and he knows this type of activity would hurt her. So we run the test, and he goes

+3, +2, +5, -3

He's NDI???
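Run Jim's numbers through the two rules sketched earlier in the thread and you see exactly the tension he is pointing at (again, an illustrative Python toy, not anyone's official method):

scores = [3, 2, 5, -3]   # Q3, Q5, Q8, Q9 from the example above

# Grand-total rule: +3 +2 +5 -3 = +7, which clears a +6 cutoff -> NDI.
print(sum(scores) >= 6)    # True

# Spot rule: Q9 sits at -3, so spot scoring (Backster-style) would flag
# that question instead of passing the subject overall.
print(min(scores) <= -3)   # True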

Next subject: Regarding PE testing that is not a CQ technique, no pass/fail test can be run, since multiple issues can weigh variously on the conscience of the examinee. Significant Response (as on the R&I) to specific questions can be reported, though. Supporting this idea: his theft from work 6 months ago (significant) could get overshadowed by his obtaining an escort last night (less so)...

Your thoughts?

Jim

P.S. I agree. Sometimes it is necessary to "stir the pot" to stimulate conversation.


Barry C
Member
posted 02-26-2005 11:56 AM
Jim,

I agree with you, and your example shows why. As far as psych set goes, I don't think the Utah guys ever really accepted it at all, so they didn't really replace it - they discarded it from the get-go because it's not good science. They will refer to it - as I do, and it looks like you will - to describe the difference in salience between one question type and another, i.e., RQ v. CQ. But your example points out that one (relevant) question can be hit harder than others, dampening the others to the point at which we aren't willing to call the guy NDI to any, period. After all, he might be lying to one, two, three, or all of them, but his energy went to one for some reason.

For the record, I have never run a Utah multi-issue test, and you cite the exact reason why. I would call the guy in your example DI and interrogate on everything. If I stuck to their rules, you're right, he'd be NDI, and an interrogation would (soon) violate ASTM standards. On the other hand, the Utah single-issue test is one of the most researched tests out there, and I have used it. (I like it because it runs an N RQ CQ series, and I bar the Ns to confuse those who might try CMs.)

When we get back to what's "valid" and what isn't, sticking to hard and fast formats leads to many questions. For example, the Utah test is valid - without question. An abbreviated AFMGQT doesn't make Krapohl's list because it lacks the requisite number of peer-reviewed studies. But if I abbreviate it and run it as a single-issue test (which is an accepted practice), it looks almost identical to Honts's version of the Utah test, save a few neutral questions. There is no technique that doesn't allow the addition of extra NQs, so if I were to add them, the AFMGQT would be as "valid" as a Utah test. (I suppose it would be a Utah test, even though it started as - and still would be to me - an AFMGQT. Confusing, isn't it?) But if I didn't add the neutrals, would it be less valid?

I say no, and the reason is I would have set up the test correctly with a good pre-test, establishing the examinee's psych set toward some question type I'd soon learn, and then I'd run the charts and score them appropriately. Most of what makes a test "valid" is other than the order of the questions, i.e., the pre-test, good RQs and CQs, good running of charts and asking of questions, proper scoring methods, etc. I think you'd agree any bozo can plug questions into a format - even good questions - but why are some examiners better than others when it comes to correctly calling cases? It can't be the format alone, or even in great part.

I think Gordon Barland has said there are at least 14 different theories as to why polygraph works, the most common being fear of detection, from which psych set flows, but nobody really knows. It's easier for us to stick to one thing than to get bogged down in the others, since most of us don't have the time or the backgrounds to understand them all, and why would we, since what we have works? Many of the DoDPI studies don't approach their research from a psych set standpoint; some take other approaches.

Regarding R/I PE screening tests: they are OK by ASTM standards (and Krapohl's best-practices articles that ran in POLYGRAPH last year). No one is called DI based on SR responses to an R/I test. To do so, the SR question must be plugged into a CQ test. If the person passes, he's NDI or NSR, and if he fails, he's DI on that issue.

The problem arises when the examinee hits so hard on one RQ that he doesn't react to others that he is also lying to. If the one he hits hardest, due to some other reason (fear of being disbelieved, shame, etc.), dampens the question(s) he's lying on, then they would be missed. As a matter of fact, there is a study out there demonstrating that this can and does happen, and the discussion section of the study warns that the practice of breaking out SR questions might not work like we hope it will, but it's the best we have at this point. (I think it was a Raskin / DoDPI study on multi-issue v. single-issue tests. I know it's available on the government website.)

Now, the Canadian guys haven't a leg to stand on because they teach - and only know - psych set. Apparently it only applies to their A-series test, because on the B-series test they do what Utah does, e.g., DI, NDI, NDI. (The A-series is a 3-RQ and 3-CQ single-issue test; the B-series is the same test format, but multi-issue.) I can't explain that one.


Ted Todd
Member
posted 02-26-2005 05:42 PM

If I might interrupt Barry and Sackett here for a moment, I do have a question.

Barry said that when he researched the Bi-Zone tests on file at DoDPI, he found "a lot of deviations from what we call a particular format." I don't see this as a good thing.

Since I am a Backster grad, "Standardization" has been etched into my brain. How can we have so many DoDPI tests on file under the heading of "Bi-Zone" and yet so many different formats?? Does a DoDPI "Bi-Zone" test not have a set and accepted format? If it does, are DoDPI-trained examiners free to change it?

And last, how can you use this database, loaded with different formats, to do valid and reliable research on the topic of "Bi-Zone" tests?

Don't get me wrong, I believe in research and experimentation. It is what has brought our profession out of the Gypsy tents. I just think that if you are going to put anything into one category, it should all have the same parameters, especially if we are going to rely on it.

Ted


Barry C
Member
posted 02-26-2005 07:13 PM
Ted,

It wasn't the Bi-Zone that had the deviations. There were very few of those, and the DoDPI confirmed-case database does not only contain tests from the feds: it includes some state and, I think, municipal PDs.

Cleve Backster is right in trying to get tests standardized, but as every scientist will tell you, it's not possible with a PLC test, which is why the anti-polygraph people (scientists) say what they do. The reason is the pre-test is not standardized from examiner to examiner.

The point is we need to standardize what works. The question is, what does work? Since the Backster You-Phase and the Bi-Zone are essentially the same test, how can scoring to the greatest control in one (the Bi-Zone) and scoring to the weakest in the other (Backster) both be valid? (Well, technically they can both be valid, but to different degrees. For example, polygraph as a whole is valid in that it is 90%+ accurate, and CVSA is valid in that it is 36% accurate. Odd, isn't it? They'd be more accurate if they reversed their calls.)

Can you add an extra neutral to a test? How do you know adding one between questions 4 and 5 doesn't invalidate the test? What about one between 5 and 6? There's no research out there addressing that specifically.

How come some people reverse the order of the SR and SYM in Backster tests? What is the standard, or are they both standard? My point is not to attack Backster tests - or any other for that matter. It is that YOU have a standard, and you should, but (legit) deviation from your standard does not necessarily invalidate a test.

Why is scoring PVCs part of some people's "standard" scoring when there is virtually no research to support that they have any correlation with deception? Is that a "standard" we want to hold on to?

The same could be asked of the Virginia school's test. It is a DoDPI federal ZCT in format (they call it a "Modified Backster"), but they score question 5 only to 4, and they use a +5/-7 cut-off, and I don't think they use spot scores to make a DI call, but I could be wrong. How can the same test be scored so differently and still be considered standard to anyone outside looking in? (Because the test is taught at an APA accredited school, we consider it valid, and it is, but which is better?)

Even the Backster tests have evolved. When Matte came along with his inside/outside track, Backster allowed it in his tests, but not everyone runs them. Which test format is the standard format? Is there research saying it will work in both a Matte and Backster test? (I don't think any such research is really necessary. If it works in Matte's test - and it does - why wouldn't it in a Backster?)

By the way, Ted, do you run You-Phase tests? Because I can use any confirmed tests you might be so kind as to send me. OSS doesn't care how the examiner scores; it's going to do it its way, which it does better than most hand scorers with the three-question format.

(I asked Tom at the Backster school for tests, and he referred me to Jim Matte, who told me he doesn't have the data, particularly in computerized format. I mention that because the research I'm doing would ultimately "validate" any form of a Bi-Zone / You-Phase or whatever you want to call it.)

I am not advocating getting away from standards, just better identifying which standards are essential and which aren't.

Just who did start stirring this pot anyhow?


Ted Todd
Member
posted 02-27-2005 09:01 AM
Barry,

Thanks for the clarification. I am currently sending all my confirmed tests to Bruce White at Axciton, but I could certainly send a copy to you as well.

Ted


Barry C
Member
posted 02-27-2005 02:27 PM
I'd appreciate it. If I get enough data, I plan on creating another database to share with those who want it, so I'll keep him in mind. Maybe I'll give him a call and see what else he might have.


Lieguy
Member
posted 03-09-2005 01:49 PM
Hi All;

God, I hate to wade into this "heavy" forum topic, but I have to say something here about testing formats and validity.

My basic complaint has already been registered here, that is, the "recognized" formats are constantly changing and evolving over time. Here's an example:

I originally went to Backster for my basic polygraph training. While there, Cleve "signed the deal" with Lafayette and started advocating their computerized polygraph and the computerized scoring system.

When I pointed out to him that none of the computerized algorithms score data the way Cleve advocates, he said, "That's OK, the math will take care of itself."

At the same class, Dr. Jim Matte presented his (then) new book on forensic psychophysiology and instructed us on his validated Quadri-Track system.

Cleve Backster approved of Matte's approach and the inclusion of the afraid/hoping set of questions.

Dr. Matte does not use Cleve's scoring system at all; he "goes to the left" when scoring. Cleve, as we all know, goes to a lack of reaction to get the score.

When I pointed out to Cleve that he was advocating Dr. Matte's system, which scores differently than the Backster system, Cleve again said "Don't worry about that, it will all work itself out in the math."

See my point? Here's Cleve Backster advocating computerized scoring systems, which don't follow his scoring formula at all... then he's advocating Matte's Quadri-Track formulation, which also doesn't follow Cleve's scoring methodology. So what does it mean when we say we're using the Backster system?

Perhaps the situation was best described by Dr. Frank Horvath at a seminar I went to a few years back, when he told the audience "Research has clearly shown that the CQT is a very robust technique. No matter what 'system' is being taught, the method seems to work when we compare relevants to comparison questions. All of the rest is window dressing."

I have great faith in the many able researchers who strive to perfect our techniques, but I sometimes feel that we spend too much time criticizing each other's techniques and too little time improving our own.

For instance, how many polygraphers continually improve their interview and interrogation skills by taking advanced training? Yet what does this whole polygraph thing come down to, anyway, if not our ability to interact with people?

That said (I know I'm being long-winded here), I agree with Eric Holden when he states that the primary purpose of a polygraph is the utility value of the setting.

That is, does the entire testing process achieve the desired goal of determining whether a person is being deceptive, and does that process lead to an admission of that deception?

For me, I work in the real world: I find out who is lying, and then I get them to confess. Anything else I leave to the researchers. If they invent a better system, I'll start using it.

Chip Morgan

------------------
A Half Truth is a Whole Lie


Barry C
Member
posted 03-09-2005 02:50 PM
Don't hate to wade in. That's the whole point of this forum.

You make great points, and in the process show how we seem to be our own worst enemies.

Dr. Horvath is correct, and that is what all the Utah guys are essentially saying: polygraph IS robust enough to handle slight variations. As a matter of fact, Dr. Honts told me his chapter in Kleiner's book on polygraph validity applies to ALL CQT tests, because these things we get hung up on (the window dressing) don't matter. His point was (and is) that if you have a couple of well-developed CQs and a couple of well-developed RQs, it's a CQT and it's a valid test.

Just remember there is a difference between utility and validity, which might not matter in practice, but does in the courtroom.


Ted Todd
Member
posted 03-09-2005 06:12 PM
Chip,

Where the HAAALE have you been? Welcome back.

Hey, last time I heard Dr. Matte speak, which was within the last year, he was supporting scoring to the weakest control. He also cited some research that showed this reduces false positives. I could be wrong about this, but maybe this discussion will stir up a response from the good doctor!

Ted


Barry C
Member
posted 03-10-2005 03:08 PM
That is news to me. I'm not aware of any research in which scoring to the weakest control would result in fewer false positives. Clearly, doing so would result in more negative scores, which would have the opposite effect, and there is a lot of research to support that. (If such research does exist, I'd like to see it, so if somebody knows where it is, please cite it.)

There is research to support that asking an irrelevant question before an RQ also results in a more negative score, biasing the test against the truthful (and producing more false positives), which is why some of our MGQTs are affectionately called "DI tests."


dayok
Member
posted 03-11-2005 03:15 AM
Chip:
Amen!!! I so agree with you.

Dario Karmel


J L Ogilvie
Moderator
posted 03-14-2005 03:17 PM
We often get hung up on "validity".

I think most of us can agree that if we make minor changes in a technique, it doesn't really change anything - like having a control on the right or left of a relevant, and things like that. However, if we do that, we can't say what the validity is unless we do the research.

Validity in a research setting means high-quality research, such as at a university; someone replicating the research with outcomes that are statistically close; and review by people in the profession who can't disprove the research.

Just because something is not validated does not mean it doesn't work. It just means that no one has actually completed the research, replication and peer review.

I know we have discussed this in the past, and there are several techniques in use that have not been validated but work just fine. The problem with using them is that if questioned in court, you can't defend them as well as a technique that has been validated.

Most adaptations of techniques would not make any statistically significant difference in the validity versus the original technique. You just can't prove that scientifically without the research to back it up.

Jack


Barry C
Member
posted 03-14-2005 04:26 PM
For the most part, I agree. I simply believe the CQT - virtually any CQT - has been so thoroughly tested by the means you state that we can scientifically defend any of them as valid. Again, if you ask Dr. Honts - from whom a lot of the research comes - he'll agree. His chapter in Kleiner applies to ALL CQTs, even those that didn't make Don's list.

As to where a CQ vs. an RQ goes, there is research showing that asking a neutral question before a relevant question results in a lower score, which means a bias against the truthful person. The result might be a valid test, but one that is less reliable than a test in which a CQ (instead of a neutral) is asked before a relevant question.

Keep in mind "valid" does not mean optimal. The voice stress people run a valid test. The problem is it is only accurate 36% of the time. Running a valid test isn't everything though. What if you run a valid test format such as the Federal ZCT, but you don't know how to develop CQs? Is the test valid or even useful?

What if you are lousy at scoring? I've heard some examiners were offended at the need to be tested for scoring proficiency (at an 86% minimum) in order to be certified to run Marin tests under the new ASTM standard. That figure was arrived at because it was the average in the NAS study. That is, half of the examiners were making correct calls 86% of the time or less. (About 1/4 were making correct calls less than 81% of the time.) Why are we so worried about format accuracies - which generally exceed 90% - when many of us can't score them correctly at that rate?

I think we overdo this validity thing. When do we draw the line? If I step off my bottom step (about 8"), gravity will pull me down, so I can test the height/gravity issue scientifically rather easily. But what if I step off the top deck (about 3')? I tried it; gravity still pulls me down. What if I step off my roof? I've never tried it, so can I say that I don't know (scientifically) what would happen, i.e., that gravity would win again? You might say you've fallen off your roof, so you can prove that one too, but how do I know that would happen if I stepped off your roof? The only way to know (scientifically) is for me to do it. (I know, some of you are willing to let me up on your roofs and give me a push right now, huh?)

At what point do we accept that whether I step off a step, a chair, my roof, the fourth-floor window, etc., gravity is going to win every time? I'm willing to accept that gravity is going to win and avoid the extra (and not all that helpful) testing of the issue. We've tested the principles of polygraph enough to know that ANY well-developed CQT is a valid test. We need to agree on that if we want to move ahead. If not, an attorney can just call two of us in to fight against each other and let us do our own damage.


J L Ogilvie
Moderator
posted 03-15-2005 03:16 PM
Unfortunately, I think we do have to draw a line somewhere. If not, why bother with standards at all?

If everyone would always run a proper test using a proven testing technique we wouldn't be having any of these discussions. We all know that is not the case.

I agree that a well-run CQT does not necessarily have to be scientifically validated to work; indeed, several good ones are not as yet. But if you were asked to review a set of charts and you couldn't figure out what format was used, what would you do? If you asked the examiner what format he used and he said, "None really, I just put some relevant and comparison questions together and asked them," what would you do?

The idea behind research and specific types of techniques is to be able to quantify the effectiveness of a technique, not to show that another technique does not work. Minor differences between techniques are probably not going to change the overall effectiveness, but they could. The point is we don't know for sure what effect they will have, if any, which makes a technique harder to defend if necessary.

The most important thing in a test is that it is done properly. If someone can't or doesn't set controls well, they are hurting their chances of a reliable outcome. If they set controls improperly using a technique of their own design, with no research behind it, they would be compounding the problem, wouldn't they?

Most of us have the ability to look at a set of charts and give an opinion, but if we didn't watch a video to see whether the examiner set the comparisons properly, or gave special emphasis to any of the questions during data collection, can we really be sure of the outcome?

Forget the list for now; there are several good techniques not on the list. If it were just us professionals striving to do a good job, it wouldn't matter. Unfortunately, there are a lot of unprofessionals out there and millions of people who have no earthly idea what a good polygraph is. That's who standards and scientific validation are for - not the examiners who constantly strive to do the best possible test every time.

Jack


Barry C
Member
posted 03-15-2005 08:29 PM
I agree we must draw the line somewhere, and there must be standards. What I'm saying, and I think you'd agree, is that those standards should be based on research. Rather than fighting over issues such as whether a Backster test is better than a Utah test, etc., let's look at what we know works and what we can defend.

With that said, it's a no-brainer that you are going to be able to defend the - or should I say a, since there are a few - Utah test over a Matte test, for example, because there is so much research to support the former. When it comes to utility, though - which is what most of us are interested in - the window dressing doesn't matter much.

If I were to run a test for court, sure, I'd much rather run a "valid" format, but I'm not going to go too crazy over a recognized technique taught at some XYZ school that might not make the list, so long as it's acceptable.

What do you mean by "special emphasis" to certain questions during the data collection? The Utah test calls for stimulating the examinee to the controls between charts. They've got the research to show it works. The accuracy rate is about 95.5%, and the problem in the research is false positives - not false negatives, as we would expect.

When you mention unprofessionals, that's my sore spot - and my point. I'll take your Matte test, Bi-Zone, AF MGQT, etc., over their Utah or any other "valid" test any day. Why? Because those things we professionals do matter more than plugging questions into a template. Any bozo can do that, and you make the point well: sadly, there are too many doing so.


Ted Todd
Member
posted 03-15-2005 10:00 PM
28 Posts to this topic! Yes folks, I think that might just be a new Polygraph Place record!

Agree or disagree, this thread has been very informative!

Ted


Barry C
Member
posted 03-16-2005 08:42 AM
Yes, Ted, it has been informative, and I was thinking the same thing about the length.

As far as agreeing or disagreeing goes, as I looked it over, I realized sometimes I don't even agree with myself, but then again, that's not all that unusual.


J L Ogilvie
Moderator
posted 03-16-2005 02:48 PM
Barry, I wrote a rather long reply and then must have lost it somewhere. At least it didn't get posted.

This one will be short. When I said "special emphasis" I meant an examiner using voice inflection on certain questions.

For the record, I am not knocking any technique. I used Utah as my primary for years and did stim the controls between charts as instructed.

I had a lot of good stuff in the post I thought I had done earlier, but I have neither the time nor the energy to try to repeat it now. Sorry.

Jack


Barry C
Member
posted 03-16-2005 03:53 PM
Jack,

I figured that's what you meant.

I too have come up with some long posts (imagine that) that have found their way to forum limbo. May they rest in peace.

I'm not going to any of the national seminars this year, but will I be able to put faces to names here (in the Portland, Maine area) at AAPP in 2006?


J L Ogilvie
Moderator
posted 03-18-2005 08:27 AM

Some of the faces you don't want to see. Scary - just kidding. I, on the other hand, am quite handsome.

Sorry to miss you this year at AAPP, but hopefully next year.

Jack

